religious group


Llms, Virtual Users, and Bias: Predicting Any Survey Question Without Human Data

Sinacola, Enzo, Pachot, Arnault, Petit, Thierry

arXiv.org Artificial Intelligence

Large Language Models (LLMs) offer a promising alternative to traditional survey methods, potentially enhancing efficiency and reducing costs. In this study, we use LLMs to create virtual populations that answer survey questions, enabling us to predict outcomes comparable to human responses. We evaluate several LLMs, including GPT-4o, GPT-3.5, Claude 3.5 Sonnet, and versions of the Llama and Mistral models, comparing their performance to that of a traditional Random Forests algorithm using demographic data from the World Values Survey (WVS). LLMs demonstrate competitive performance overall, with the significant advantage of requiring no additional training data. However, they exhibit biases when predicting responses for certain religious and population groups, underperforming in these areas. Random Forests, on the other hand, demonstrate stronger performance than LLMs when trained with sufficient data. We observe that removing censorship mechanisms from LLMs significantly improves predictive accuracy, particularly for underrepresented demographic segments where censored models struggle. These findings highlight the importance of addressing biases and reconsidering censorship approaches in LLMs to enhance their reliability and fairness in public opinion research.
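A minimal sketch of the Random Forests baseline the abstract describes: predicting a survey answer from demographic features. The features, labels, and hyperparameters below are illustrative stand-ins, not the paper's actual WVS setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for WVS demographics: age, sex, education level.
n = 1000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # sex (binary-coded)
    rng.integers(0, 8, n),     # education level
])
# Synthetic survey answer loosely tied to education (illustration only).
y = (X[:, 2] + rng.normal(0, 1.5, n) > 4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Unlike the LLM approach, this baseline only works once enough labeled respondents exist to train on, which is the trade-off the study highlights.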


Measuring Spiritual Values and Bias of Large Language Models

Liu, Songyuan, Zhang, Ziyang, Yan, Runze, Wu, Wei, Yang, Carl, Lu, Jiaying

arXiv.org Artificial Intelligence

Large language models (LLMs) have become integral tools for users from various backgrounds. LLMs, trained on vast corpora, reflect the linguistic and cultural nuances embedded in their pre-training data. However, the values and perspectives inherent in this data can influence the behavior of LLMs, leading to potential biases. As a result, the use of LLMs in contexts involving spiritual or moral values necessitates careful consideration of these underlying biases. Our work starts by verifying this hypothesis through testing the spiritual values of popular LLMs. Experimental results show that LLMs' spiritual values are quite diverse, contrary to the stereotype that they are atheist or secularist. We then investigate how different spiritual values affect LLMs in social-fairness scenarios (e.g., hate speech identification). Our findings reveal that different spiritual values indeed lead to different sensitivities to different hate target groups. Furthermore, we propose continuing to pre-train LLMs on spiritual texts, and empirical results demonstrate the effectiveness of this approach in mitigating spiritual bias.


Bridging or Breaking: Impact of Intergroup Interactions on Religious Polarization

Chaturvedi, Rochana, Chaturvedi, Sugat, Zheleva, Elena

arXiv.org Artificial Intelligence

While exposure to diverse viewpoints may reduce polarization, it can also backfire and exacerbate polarization when the discussion is adversarial. Here, we examine whether intergroup interactions around important events affect polarization between majority and minority groups in social networks. We compile data on the religious identity of nearly 700,000 Indian Twitter users engaging in COVID-19-related discourse during 2020. We introduce a new measure of an individual's group conformity based on contextualized embeddings of tweet text, which helps us assess polarization between religious groups. We then use a meta-learning framework to examine heterogeneous treatment effects of intergroup interactions on an individual's group conformity in light of communal, political, and socio-economic events. We find that for political and social events, intergroup interactions reduce polarization. This decline is weaker for individuals at the extreme who already exhibit high conformity to their group. In contrast, during communal events, intergroup interactions can increase group conformity. Finally, we decompose the differential effects across religious groups in terms of emotions and topics of discussion. The results show that the dynamics of religious polarization are sensitive to context and have important implications for understanding the role of intergroup interactions.
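One way to picture an embedding-based conformity measure like the one the abstract mentions: score a user by how much closer their average tweet embedding sits to their own group's centroid than to the other group's. The function name, scoring rule, and toy vectors below are our assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def group_conformity(user_vecs, ingroup_vecs, outgroup_vecs):
    """Hypothetical conformity score: cosine similarity of the user's
    mean embedding to the in-group centroid minus its similarity to
    the out-group centroid. Positive = leans toward the in-group."""
    def unit(v):
        return v / np.linalg.norm(v)
    u = unit(user_vecs.mean(axis=0))
    return float(u @ unit(ingroup_vecs.mean(axis=0))
                 - u @ unit(outgroup_vecs.mean(axis=0)))

# Toy 2-D "embeddings": this user's tweets lean toward the in-group centroid,
# so the score comes out positive.
user = np.array([[1.0, 0.1], [0.9, 0.2]])
ingroup = np.array([[1.0, 0.0], [0.9, 0.1]])
outgroup = np.array([[0.0, 1.0], [0.1, 0.9]])
score = group_conformity(user, ingroup, outgroup)
```

In practice the embeddings would come from a contextualized language model over tweet text rather than hand-set 2-D vectors.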


PACO: Provocation Involving Action, Culture, and Oppression

Garg, Vaibhav, Xu, Ganning, Singh, Munindar P.

arXiv.org Artificial Intelligence

In India, people identify with a particular group based on certain attributes such as religion. These religious groups are often provoked against each other. Previous studies show the role of provocation in increasing tensions between India's two prominent religious groups: Hindus and Muslims. With the advent of the Internet, such provocation also surfaced on social media platforms such as WhatsApp. By leveraging an existing dataset of Indian WhatsApp posts, we identified three categories of provoking sentences against Indian Muslims. We then labeled 7,000 sentences for these three provocation categories and called the resulting dataset PACO. We leveraged PACO to train a model that can identify provoking sentences in a WhatsApp post. Our best model, a fine-tuned RoBERTa, achieved a 0.851 average AUC score over five-fold cross-validation. Automatically identifying provoking sentences could stop provoking text from reaching the masses and could prevent discrimination or violence against the target religious group. Further, we studied the provocative speech through a pragmatic lens, identifying the dialog acts and impoliteness super-strategies used against the religious group.
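The evaluation protocol described above, averaging ROC AUC over five cross-validation folds, can be sketched as follows. For a self-contained example we substitute a TF-IDF plus logistic regression classifier on a tiny synthetic corpus in place of the paper's fine-tuned RoBERTa; the texts and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Tiny synthetic corpus standing in for labeled WhatsApp sentences
# (1 = provoking, 0 = not provoking).
texts = ["provoking sentence example"] * 30 + ["neutral sentence example"] * 30
labels = np.array([1] * 30 + [0] * 30)

# Vectorize, then average ROC AUC over five stratified folds,
# mirroring the paper's evaluation protocol.
X = TfidfVectorizer().fit_transform(texts)
scores = cross_val_score(LogisticRegression(), X, labels,
                         cv=5, scoring="roc_auc")
mean_auc = scores.mean()
```

On real, noisy social-media text the fold-averaged AUC is what summarizes the classifier, as in the paper's reported 0.851.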


Explained: Why Artificial Intelligence's religious biases are worrying

#artificialintelligence

It has come to a point where artificial intelligence is also being used to enhance creativity. Give an AI-based language model a phrase or two written by a human, and it can add more phrases that sound uncannily human-like. Such models can be great collaborators for anyone trying to write a novel or a poem. However, things aren't as simple as they seem, and the complexity rises owing to biases that come with artificial intelligence.


The Church of AI is dead… so what's next for robots and religion?

#artificialintelligence

The Way of the Future, a church founded by a former Google and Uber engineer, is now a thing of the past. It's been a few months since the world's first AI-focused church shuttered its digital doors, and it doesn't look like its founder has any interest in a revival. But it's a pretty safe bet we'll be seeing more robo-centric religious groups in the future. Perhaps, however, they won't be about worshipping the machines themselves. The world's first AI church, The Way of the Future, was the brainchild of Anthony Levandowski, a former autonomous vehicle developer who was convicted on 33 counts of theft and attempted theft of trade secrets. In the wake of his conviction, Levandowski was sentenced to 18 months in prison, but his sentence was delayed due to COVID and, before he could be ordered to serve it, former president Donald Trump pardoned him.


Is nanotechnology going to send us all to hell? Philip Ball

AITopics Original Links

What does God think of nanotechnology? The glib answer is that, like the rest of us, he's only just heard of it. If you think it's a silly question anyway, consider that a 2009 study claimed "religiosity is the dominant predictor of moral acceptance of nanotechnology". Science anthropologist Chris Toumey has recently surveyed this moral landscape. Nanotechnology is a catch-all term for a host of diverse efforts to manipulate matter on the scales of atoms and cells.